Translation of modelling for public health impact

How should we evaluate modelling work for outbreak response?

Sebastian Funk & Katharine Sherratt

https://epiforecasts.io/slides/who_hub_20241008.html

“All models are wrong, but some are useful”

George Box

  • How do we know when models are useful?
  • Does it matter how wrong they are?

To answer these questions, we need to evaluate modelling work.

What and how to evaluate?

Some examples

Evaluation of predictive modelling

Forecasts vs. Scenarios

Image from: https://covid19scenariomodelinghub.org/

Evaluation of forecasts

Assess the quality of models by how closely predictions match reality

Sherratt et al., eLife, 2023
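As an illustrative sketch of comparing predictions with reality (a common proper scoring rule for forecast hubs, not necessarily the exact metric of the cited evaluation), the interval score penalises both wide prediction intervals and observations that fall outside them:

```python
def interval_score(lower, upper, observed, alpha):
    """Interval score for a central (1 - alpha) prediction interval.

    Sum of three penalties: the interval width, plus the distance by
    which the observation falls below or above the interval, scaled
    by 2/alpha. Lower scores are better; the score is proper.
    """
    width = upper - lower
    penalty_below = (2 / alpha) * max(lower - observed, 0)
    penalty_above = (2 / alpha) * max(observed - upper, 0)
    return width + penalty_below + penalty_above


# A 90% interval [10, 20]: an observation inside scores only the width;
# one outside is additionally penalised for the size of the miss.
print(interval_score(10, 20, 15, alpha=0.1))  # 10
print(interval_score(10, 20, 25, alpha=0.1))  # 110
```

Averaging such scores over forecast dates and locations gives a single number per model, which is how hub-style evaluations rank competing forecasts.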

Evaluation of scenario projections

Assessing predictions requires matching scenarios to reality.

Howerton et al., Nature Communications, 2023
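One way to operationalise the matching step (a hypothetical sketch, not the method of the cited paper; names and inputs are invented for illustration) is to select the scenario whose assumed inputs came closest to what actually occurred, and then score that scenario's projection like a forecast:

```python
def closest_scenario(scenarios, realized_input):
    """Pick the scenario whose assumed input value is closest to what
    actually occurred, so its projection can be scored against data."""
    return min(scenarios, key=lambda s: abs(s["assumed_input"] - realized_input))


# Hypothetical scenarios differing in one assumed input (e.g. vaccine uptake).
scenarios = [
    {"name": "low uptake", "assumed_input": 0.3},
    {"name": "high uptake", "assumed_input": 0.7},
]
best = closest_scenario(scenarios, realized_input=0.65)
print(best["name"])  # high uptake
```

Scenarios whose assumptions were never realised are left unscored, which is one reason scenario evaluation differs from forecast evaluation.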

Utility could be independent of predictive ability

Saltelli, 2018

Evaluation of utility to policy makers

Assess utility by asking recipients of modelling advice?

Meltzer et al., MMWR Weekly Report, 2014

Frieden and Damon, Emerg Inf Dis, 2015

Evaluation of public health impact

  • Modelling can save lives, but it can also do harm. How do we tell one from the other?
  • To assess the public health impact of modelling, we need to qualify and quantify it.

Evaluation of the process

Sherratt et al., Wellcome Open Res, 2024

Kucharski et al., PLOS Biology, 2020

Summary / discussion points

  • Evaluation can relate to model correctness, process, impact, etc.
  • Greater correctness does not always mean greater or better impact (although there are good reasons to aim for quality and correctness)
  • Evaluating utility to decision makers is not the same as evaluating public health impacts of modelling
  • Any evaluation needs clarity on what is being evaluated and how, before the work is done

Ongoing activity in this space

Scoping review on evaluation of modelling work with Johanna Hanefeld, Emil Iftekhar, Julia Fitzner, Duaa Rao, Aleena Tanveer.